
Adaptive Appearance Model Tracking for Still-to-Video Face Recognition



Abstract

Systems for still-to-video face recognition (FR) seek to detect the presence of target individuals based on reference facial still images or mug-shots. These systems encounter several challenges in video surveillance applications due to variations in capture conditions (e.g., pose, scale, illumination, blur and expression) and to camera inter-operability. Beyond these issues, few reference stills are available during enrollment to design representative facial models of target individuals. Systems for still-to-video FR must therefore rely on adaptation, multiple face representations, or synthetic generation of reference stills to enhance the intra-class variability of face models. Moreover, many FR systems only match high-quality faces captured in video, which further reduces the probability of detecting target individuals. Instead of matching faces captured through segmentation to reference stills, this paper exploits Adaptive Appearance Model Tracking (AAMT) to gradually learn a track-face-model for each individual appearing in the scene. The Sequential Karhunen–Loeve technique is used for online learning of these track-face-models within a particle-filter-based face tracker. Meanwhile, these models are matched over successive frames against the reference still images of each target individual enrolled in the system, and the matching scores are accumulated over several frames for robust spatiotemporal recognition. A target individual is recognized if the scores accumulated for a track-face-model over a fixed time surpass a decision threshold. The main advantage of AAMT over traditional still-to-video FR systems is the greater diversity of facial representations that may be captured during operations, which can lead to better discrimination for spatiotemporal recognition. Compared to state-of-the-art adaptive biometric systems, the proposed method selects facial captures to update an individual's face model more reliably because it relies on information from tracking. Simulation results obtained with the Chokepoint video dataset indicate that the proposed method provides a significantly higher level of performance compared to state-of-the-art systems when a single reference still per individual is available for matching. This higher level of performance is achieved when the diverse facial appearances that are captured in video through AAMT correspond to those of the reference stills.
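
The online learning step mentioned in the abstract can be illustrated with a short sketch. The Python fragment below implements a basic Sequential Karhunen–Loeve (incremental SVD) update in the spirit of the technique named above; the function name and shapes are illustrative assumptions, and the running-mean update and forgetting factor used in common variants of this method are omitted for brevity.

    import numpy as np

    def skl_update(U, s, B, k):
        # U: (d, r) orthonormal basis of the current track-face-model subspace
        # s: (r,)   singular values of the current model
        # B: (d, m) newly tracked face patches, one column per frame
        # k: number of components to retain (k <= r + m)
        proj = U.T @ B                # coordinates of B in the current subspace
        resid = B - U @ proj          # part of B orthogonal to the subspace
        Q, R = np.linalg.qr(resid)    # orthonormal basis for the residual
        r = s.shape[0]
        K = np.block([[np.diag(s), proj],
                      [np.zeros((Q.shape[1], r)), R]])
        Uk, sk, _ = np.linalg.svd(K, full_matrices=False)
        U_new = np.hstack([U, Q]) @ Uk[:, :k]   # rotate the enlarged basis
        return U_new, sk[:k]

Starting from the SVD of a few initial face patches, repeated calls fold new frames into the subspace at a fixed cost per update, which is what makes the track-face-model learnable online inside the tracker.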
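Likewise, the spatiotemporal decision rule described in the abstract (accumulating matching scores for a track-face-model over successive frames and comparing the sum against a decision threshold) might look as follows; the window length and threshold here are placeholders, not values taken from the paper.

    from collections import deque

    def spatiotemporal_decision(frame_scores, window=30, threshold=20.0):
        # frame_scores: per-frame similarity of one track-face-model to a
        #               target's reference still (higher = more similar)
        # window, threshold: illustrative placeholders only
        buf = deque(maxlen=window)
        for t, score in enumerate(frame_scores):
            buf.append(score)
            # declare the target recognized once the score accumulated
            # over the last `window` frames surpasses the threshold
            if len(buf) == window and sum(buf) > threshold:
                return t
        return None  # target not recognized on this track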
